Convergence, Rank Reduction and Bounds for the Stationary Analysis of Markov Chains
Author
Abstract
Peng, Bin. Convergence, Rank Reduction and Bounds for the Stationary Analysis of Markov Chains (under the direction of William J. Stewart). With existing numerical methods, the computation of stationary distributions for large Markov chains is still time-consuming, a direct result of the state-space explosion problem. In this thesis, we introduce a rank reduction method for computing stationary distributions of Markov chains for which low-rank iteration matrices can be formed. We first prove that, for an irreducible Markov chain, a necessary and sufficient condition for convergence in a single iteration is that the iteration matrix have rank one. Since most iteration matrices have rank greater than one, we also consider the Wedderburn rank-1 reduction formula and develop a rank reduction procedure that takes an initial iteration matrix of rank greater than one and modifies it in successive steps, under the constraint that the exact solution be preserved at each step, until a rank-1 iteration matrix is obtained. When the iteration matrix has rank r, the proposed algorithm has time complexity O(r²n). Second, we investigate the relationships among lumpability, weak lumpability, quasi-lumpability and near-complete decomposability. These concepts are important in aggregating and disaggregating Markov chains. White's algorithm for identifying all possible lumpable partitions of a Markov chain is improved by incorporating lumpability tests on special state orderings. Finally, instead of computing exact stationary distributions, we design stochastic-ordering-based techniques to bound them. Upper bounds can be obtained using recently developed constructive algorithms. We observe that the more lumpable the partitioning, the more accurate the upper bound for the state of interest, both with and without matrix transformation.
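The Wedderburn rank-1 reduction formula mentioned above can be sketched numerically. This is a minimal illustration of a single reduction step, not the thesis's full algorithm, and the matrix A and vectors x, y below are arbitrary illustrative choices: whenever yᵀAx ≠ 0, the update B = A − (Ax)(yᵀA)/(yᵀAx) lowers the rank of A by exactly one.

```python
import numpy as np

# One Wedderburn rank-1 reduction step (illustrative sketch):
# if y @ A @ x != 0, then B = A - outer(A x, y A) / (y A x)
# satisfies rank(B) = rank(A) - 1.

rng = np.random.default_rng(0)

# Build a random n x n matrix of rank exactly r.
n, r = 6, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

x = rng.standard_normal(n)
y = rng.standard_normal(n)
omega = y @ A @ x                 # Wedderburn condition: omega must be nonzero
assert abs(omega) > 1e-10

B = A - np.outer(A @ x, y @ A) / omega

print(np.linalg.matrix_rank(A))   # rank before: 3
print(np.linalg.matrix_rank(B))   # rank after:  2
```

Repeating this step r − 1 times (with fresh vectors satisfying the nonzero condition each time, and, in the thesis's setting, under the additional constraint that the stationary solution be preserved) drives the iteration matrix down to rank one.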
Lastly, we combine the approaches of state permutation, matrix transformation and state partitioning to improve the quality of the upper bound for the state of interest.

Convergence, Rank Reduction and Bounds for the Stationary Analysis of Markov Chains, by Bin Peng. A dissertation submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Operations Research. Raleigh, 2004. Approved by: Chair of Advisory Committee.

Biography

Born in Hunan province, China, Bin Peng is one of two brothers and the elder son of Yuanchong Peng and Shengying Mo, both of whom are traditional Chinese farmers. Bin's only younger brother, Ru, was injured in a coal mining accident in 1999, suffered severe nerve damage, and tragically lost forever his ability to walk and move. His parents have been patiently caring for him day after day, month after month and year after year. In July 1997, Bin Peng graduated from the Department of Mathematics, Xiangtan University in Hunan province, China, with a B.S. degree in Economics. He pursued graduate study in the Department of Statistics & Operations Research at Fudan University (Shanghai, China) and was awarded a Master of Science degree in Operations Research in 2000. He then came to North Carolina State University to continue his studies toward a doctoral degree in Operations Research. Bin Peng married Manhong Chai in 2000, and they have a son, Maxwell.
Similar resources
Relative Entropy Rate between a Markov Chain and Its Corresponding Hidden Markov Chain
In this paper we study the relative entropy rate between a homogeneous Markov chain and a hidden Markov chain defined by observing the output of a discrete stochastic channel whose input is the finite-state-space homogeneous stationary Markov chain. For this purpose, we obtain the relative entropy between two finite subsequences of the above-mentioned chains with the help of the definition of...
Subgeometric rates of convergence of f-ergodic Markov chains
We study bounds on the rate of convergence of aperiodic Markov chains on a general state space to the stationary distribution. Our results generalize previous results on convergence rates for Markov chains [23]. We also improve results from [9] on convergence rates in the local renewal theorem. The results are applied to delayed random walks.
Convergence of ODE approximations and bounds on performance models in the steady-state
We present a limiting convergence result for differential equation approximations of continuous-time Markovian performance models in the stationary (steady-state) regime. This extends existing results for convergence up to some finite time. We show how, for a large class of performance models, this result can be inexpensively exploited to make strong statements about the stationary behaviour of...
Geometric Convergence Rates for Time-Sampled Markov Chains
We consider time-sampled Markov chain kernels, of the form P_μ = Σ_n μ{n} Pⁿ. We prove bounds on the total variation distance to stationarity of such chains. We are motivated by the analysis of near-periodic MCMC algorithms.
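A time-sampled kernel of the form above inherits the stationary distribution of P, since πPⁿ = π for every n implies πP_μ = π. The sketch below checks this on a small illustrative chain; the matrix P and weights μ are made up for the example, not taken from the paper:

```python
import numpy as np

# A small birth-death transition matrix (illustrative).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# A probability distribution mu on sampling times {1, 2, 3}.
mu = {1: 0.2, 2: 0.5, 3: 0.3}

# Time-sampled kernel: P_mu = sum_n mu{n} P^n.
P_mu = sum(w * np.linalg.matrix_power(P, n) for n, w in mu.items())

# Stationary pi of P: normalized left eigenvector for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

print(np.allclose(pi @ P, pi))      # pi is stationary for P
print(np.allclose(pi @ P_mu, pi))   # ... and for the time-sampled kernel
```

P_μ is itself a stochastic matrix (its rows sum to one), so the usual convergence machinery applies to it directly.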
Shift-coupling and convergence rates of ergodic averages
We study convergence of Markov chains {X_k} to their stationary distributions π(·). Much recent work has used coupling to get quantitative bounds on the total variation distance between the law L(X_n) and π(·). In this paper, we use shift-coupling to get quantitative bounds on the total variation distance between the ergodic average law (1/n) Σ_{k=1}^{n} L(X_k) and π(·). This avoids certain problems, re...
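The quantity bounded above, the total variation distance between the ergodic average law (1/n) Σ_{k=1}^{n} L(X_k) and π, can be computed directly for a small finite chain. The two-state chain and initial law below are illustrative choices, not from the paper:

```python
import numpy as np

# A two-state chain with stationary distribution pi = (2/3, 1/3).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])
nu = np.array([1.0, 0.0])          # start deterministically in state 0

def tv(p, q):
    """Total variation distance between two distributions on a finite set."""
    return 0.5 * np.abs(p - q).sum()

# Laws L(X_1), ..., L(X_200), computed exactly as nu P^k.
laws = []
law = nu
for _ in range(200):
    law = law @ P
    laws.append(law)

avg_50 = np.mean(laws[:50], axis=0)     # ergodic average law, n = 50
avg_200 = np.mean(laws, axis=0)         # ergodic average law, n = 200

# The average law moves closer to pi as n grows.
print(tv(avg_50, pi), tv(avg_200, pi))
```

Averaging dilutes the transient contribution of the early laws, which is why bounds on the average law can hold even when bounds on the individual laws L(X_n) are hard to obtain (e.g. for near-periodic chains).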
Publication date: 2004